AI Ethics and the Philosophy of Logic

Abstract

This tutorial will present a philosophical discussion of what an appropriate logical foundation for ethical AI might be. The emphasis will be not so much on morality itself, i.e., on what the moral values for AI Ethics should be, but on how systems would reason about morality and how their design would accommodate ethical behaviour. The problem is quite deep and challenges the conventional computational model of a strict logical form, even in its probabilistic variant, where a strictly maximal probability is sought. Should systems be built with hard norms that are applied under strict logical reasoning, or should they operate under a more flexible logical form of normative guidance? Is the design of ethical AI systems a one-shot process that occurs at the beginning of the “life” of a system, or should systems be designed to remain in a continuous learning mode, adjusting themselves to operate ethically?

Motivation

It is argued that AI is moving too fast, with little attention to its wider implications for society. Governments and society at large are concerned about the ethical implications of AI and are trying to find ways to address these concerns, mainly through regulation such as the recent EU AI Act. Indeed, the thrust of the discussion on AI Ethics ultimately centres on the question of how best to regulate AI. But regulation is a post-hoc process, in the sense that the emphasis is on the a posteriori compliance of AI systems with the regulatory framework that applies. This point of view essentially treats ethics in AI as a metaphysical concern, external to the design of the systems themselves.

An alternative to this regulatory perspective on AI Ethics is that of “Ethics by Design”, where the ethical issue is addressed alongside that of the “performance” of a system, within the same process of design and development. This poses the serious challenge of how to reconcile the two requirements of performance and ethicacy of AI systems. The problem is quite deep and touches on the underlying logical form of the design of the systems. It challenges the conventional strict logical form of computation, even in its probabilistic variant, where a strictly maximal probability is sought. It raises the question of what an appropriate logical formulation of the requirement of ethical compliance of systems would be. Is it one in terms of hard norms imposed on the operation of a system, norms whose validity is absolute and whose logical conclusions are demonstratively necessary?

Format of tutorial

After a short review of the logic of ethics in philosophy, from Aristotle to Kant, we will consider and address the following topics within the context of recent developments in AI:

  1. What is a good or acceptable ethical operation of a system? Perfection, or Sensitivity to special cases and Adaptability?
  2. How are ethical norms formulated and represented within an AI system? Modal Logic vs. Argumentation Logic.
  3. Strict Norm Compliance or Normative Guidance? Absolute Compliance vs. Flexible Leniency (see the illustrative sketch after this list).
  4. Ethics via Optimal Rationality or a Dialectic Rationality of satisficing, sustainable decisions?
  5. One-shot Ethics by Rational Design or Habitual Ethicacy from experience?
  6. What is an appropriate form of Ethical Data for AI to learn how to “live/operate” ethically?
  7. How does explainability contribute to the ethicacy of AI systems? What are the characteristics of good-quality, ethical explanations?
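
To make topics 2 and 3 above more concrete, here is a minimal illustrative sketch (an assumption of this write-up, not part of the tutorial materials) of how argumentation-based normative guidance differs from strict norm compliance. It computes the grounded extension of an abstract argumentation framework in the sense of Dung; the norm N and the exception E are hypothetical examples.

    # Hypothetical scenario: a hard norm N ("never disclose user data") is
    # attacked by a context-specific exception E ("disclosure prevents
    # imminent harm"). Under strict compliance N always applies; under
    # argumentation-based guidance, acceptance is settled dialectically.

    def grounded_extension(arguments, attacks):
        """Grounded extension of an abstract argumentation framework,
        as the least fixed point of the characteristic function."""
        def defended(arg, accepted):
            # arg is defended if every attacker is itself counter-attacked
            # by some already-accepted argument
            return all(any((d, attacker) in attacks for d in accepted)
                       for (attacker, target) in attacks if target == arg)
        accepted = set()
        while True:
            new = {a for a in arguments if defended(a, accepted)}
            if new == accepted:
                return accepted
            accepted = new

    arguments = {"N", "E"}
    attacks = {("E", "N")}  # the exception attacks (defeats) the norm
    print(grounded_extension(arguments, attacks))  # {'E'}

Under the grounded semantics the unattacked exception E is accepted and the hard norm N is not, illustrating flexible leniency; a strict-compliance design would instead treat N as an inviolable constraint, independently of any counter-arguments.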

Short CV of Tutor

Antonis Kakas

Antonis C. Kakas is a Professor at the Department of Computer Science of the University of Cyprus. He obtained his Ph.D. in Theoretical Physics from Imperial College London in 1984. His interest in Computing and AI started in 1989, in the group of Professor Kowalski. Since then, his research has concentrated on computational logic in AI, with particular interest in argumentation, abduction and induction and their application to machine learning and cognitive systems. Currently, he is working on the development of a new framework of Cognitive Programming, as an environment for developing human-centric AI systems that can be used naturally by developers and human users at large. He was the National Contact Point for Cyprus in the flagship EU project on AI, AI4EU. He has recently co-founded a start-up company in Paris, called Argument Theory, which offers solutions to real-life decision-taking problems based on AI Argumentation Technology.

Course material and reading

Here are two initial relevant papers. Tutorial notes and further reading will be posted here in due course.

  1. Kakas, A. C. (2025). The pollution of AI. Communications of the ACM, May 2025.
  2. Dietz, E., Kakas, A., & Michael, L. (2022). Argumentation: A calculus for human-centric AI. Frontiers in Artificial Intelligence, 5, 955579.